103 research outputs found

    Impact of Big Data over Telecom Industry

    During the past few years, data has been growing exponentially, attracting researchers to work on a popular term, Big Data. Big Data is observed in various fields, such as information technology, telecommunications, theoretical computing, mathematics, data mining and data warehousing. Data science is frequently mentioned alongside Big Data, as it uses methods to scale down Big Data. Currently, more than 3.2 billion of the world's population is connected to the internet, of which 46% connect via smartphones. Over 5.5 billion people use cell phones. As technology rapidly shifts from ordinary cell phones towards smartphones, the proportion of internet use is also growing. There is a forecast that by 2020 around 7 billion people across the globe will be using the internet, of which 52% will connect through their smartphones. By 2050 that figure will be approaching 95% of the world population. Every device connected to the internet generates data. As the majority of these devices are smartphones generating data through applications such as Instagram, WhatsApp, Apple, Google, Google+, Twitter, Flickr etc., this huge amount of data is becoming a big threat for the telecom sector. This paper gives a comparison of the amount of Big Data generated by the telecom industry. Based on the collected data, we use forecasting tools to predict the amount of Big Data that will be generated in the future and also identify the threats that the telecom industry will face from that huge amount of Big Data.

    Organizational Learning and Hotel Performance: The Role of Capabilities’ Hierarchy

    Building on the capabilities' hierarchy concept, a model of the effect of organizational learning on hotel performance is proposed and tested in this study. Data were collected via survey from 240 managers in the hotel industry of the United Kingdom and Pakistan. The results revealed strong direct interrelations between different levels of capabilities and an indirect relation between organizational learning and performance through these capabilities. This paper makes theoretical contributions to both management and hospitality and tourism research by generating an integrative and unifying framework for the organizational learning-performance relationship, clarifying capabilities' interrelationships and empirically revealing the exact way these capabilities enhance performance. It also has practical implications for hotel managers' understanding of the development and use of capabilities as a hierarchy in enhancing their hotel performance.

    Conventional Weight-Based versus Low-Dose Regimen of Heparin Administration to Achieve Target Activated Clotting Time on Cardiopulmonary Bypass in Pakistani Population

    Objective: A weight-based dose of heparin is calculated to achieve the target ACT (activated clotting time) for establishing CPB (cardiopulmonary bypass). The aim of this study was to determine whether the target ACT can be achieved with a lower dose of heparin in the Pakistani population. Methodology: This cross-sectional comparative study was conducted at the Department of Cardiac Surgery, Rawalpindi Institute of Cardiology, from 1st January 2019 to 1st January 2020. Three hundred thirty-six (336) patients undergoing elective open-heart surgery on CPB were included in the study. Patients receiving a weight-based heparin dose were placed in Group A, while those on low-dose heparin were placed in Group B. ACT was considered to have reached the target value in the range of 400-480 seconds; values between 481 and 1500 seconds were considered excessive, whereas an ACT of >1500 was regarded as potentially high-risk for peri-operative bleeding. Results: 14.1% (n=28) of Group A patients achieved the target ACT, whereas 58.3% (n=116) exceeded the target of 480. In 25.1% (n=50), ACT values were beyond the measuring capacity of the assay machine, i.e. >1500. Only 2.5% (n=5) required an additional dose of heparin. The target ACT in Group B was achieved in 19.7% (n=27); 55.5% (n=76) had excessive ACT values, whereas in 16.8% (n=23) it was >1500, and 9.5% (n=13) required an additional dose of heparin. Conclusion: In the Pakistani population, the target ACT can be achieved with a significantly lower dose than the conventional weight-based heparin dose. Larger studies, preferably randomized controlled trials, are needed to determine the optimal heparin dose calculation for safe anticoagulation during CPB.

    Portfolio Diversification and Oil Price Shocks: A Sector Wide Analysis

    This paper investigates the time-varying relationship between the oil price and the disaggregated stock market of India using dynamic conditional correlation multivariate GARCH (DCC-GARCH) and continuous wavelet transformation (CWT) modelling approaches. Our findings reveal an evolving relationship between the oil price and the disaggregated stock market. The correlations are generally volatile before the 2007-08 crisis, but since then they have been positive, implying no diversification benefits for investors during periods of rising oil prices. As emerging markets in general, and India in particular, are expected to increase their share of oil consumption in the world's energy market, we recommend that, for the stock market to grow, especially the oil-intensive industries, the government increase its reliance on alternative energy resources. Furthermore, as rising oil prices can also have adverse effects through the exchange rate channel, we suggest that monetary policies be time-varying to manage the inflationary pressures arising from extreme volatility in oil prices. Keywords: DCC-GARCH, CWT, Disaggregated stock market, India, Oil price shocks, Diversification. JEL Classifications: C50; G10; O53; Q43. DOI: https://doi.org/10.32479/ijeep.774
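    Estimating a full DCC-GARCH model requires specialized econometric tooling, but the underlying idea of a time-varying correlation between two return series can be sketched with a simple rolling-window correlation. This is a simplified stand-in, not the paper's estimator; the function names and the 60-observation window are illustrative assumptions:

    ```python
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / sqrt(vx * vy)

    def rolling_correlation(oil_returns, stock_returns, window=60):
        """Correlation over each trailing window: a crude proxy for the
        time-varying correlation that DCC-GARCH estimates parametrically."""
        return [pearson(oil_returns[t - window:t], stock_returns[t - window:t])
                for t in range(window, len(oil_returns) + 1)]
    ```

    A sustained rise of this series above zero after 2007-08 is the kind of pattern the abstract describes: positive co-movement, hence no diversification benefit when oil prices rise.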

    Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    © 2018 Ahmad et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Hyperspectral image classification with a limited number of training samples and without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieve higher classification accuracy and lower processing time with a small amount of training data than when the training samples are selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods.
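    The fuzziness-based selection step can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: it scores each sample by the fuzzy entropy of its class-membership vector (maximal when memberships hover near 0.5, i.e. near a class boundary) and keeps the most ambiguous samples as training candidates. The function names are assumptions:

    ```python
    import math

    def fuzziness(memberships):
        """Fuzzy entropy of one sample's class-membership vector (values in [0, 1]).
        Zero for crisp memberships (0 or 1), maximal when every value is 0.5."""
        total = 0.0
        for mu in memberships:
            if 0.0 < mu < 1.0:
                total -= mu * math.log(mu) + (1 - mu) * math.log(1 - mu)
        return total / len(memberships)

    def select_candidates(membership_matrix, k):
        """Pick the k samples with the highest fuzziness, i.e. those the
        current classifier is least certain about (closest to a boundary)."""
        scored = sorted(enumerate(membership_matrix),
                        key=lambda pair: fuzziness(pair[1]), reverse=True)
        return [idx for idx, _ in scored[:k]]
    ```

    Labeling only these high-fuzziness samples is what lets both classifier types reach higher accuracy from a small training set than random selection would.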

    OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    © 2017 Pervez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged that harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data through search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes that extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries, which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted Bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted Bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various Bloom filter lengths on different cloud configurations.
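    The search-space-reduction role of the Bloom filter can be illustrated with a plaintext sketch. OS2 actually stores an encrypted variant and evaluates it under homomorphic encryption, which this minimal version does not attempt; the class name, filter size and hash count below are assumptions for illustration:

    ```python
    import hashlib

    class BloomFilter:
        """Minimal plaintext Bloom filter: set membership with no false
        negatives and a tunable false-positive rate. OS2 applies the same
        membership logic to an *encrypted* filter on the untrusted server."""

        def __init__(self, size=1024, num_hashes=3):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = [0] * size

        def _positions(self, item):
            # Derive num_hashes independent positions by salting one hash.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def might_contain(self, item):
            # False positives are possible; false negatives are not.
            return all(self.bits[pos] for pos in self._positions(item))
    ```

    Because a negative answer is definitive, the server can discard non-matching encrypted documents immediately and run the expensive similarity ranking only on the candidates that remain.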

    Resolving data interoperability in ubiquitous health profile using semi-structured storage and processing

    © 2019 Association for Computing Machinery. Advancements in the field of healthcare information management have led to the development of a plethora of software, medical devices and standards. As a consequence, the rapid growth in the quantity and quality of medical data has compounded the problem of heterogeneity, thereby decreasing the effectiveness and increasing the cost of diagnostics, treatment and follow-up. However, this problem can be resolved by using a semi-structured data storage and processing engine, which can extract semantic value from a large volume of patient data produced by a variety of data sources, at variable rates and conforming to different abstraction levels. Going beyond the traditional relational model and re-purposing state-of-the-art tools and technologies, we present the Ubiquitous Health Profile (UHPr), which enables a semantic solution to the data interoperability problem in the domain of healthcare.

    Entropy based features distribution for anti-ddos model in SDN

    In modern network infrastructure, Distributed Denial of Service (DDoS) attacks are considered severe network security threats. For conventional network security tools it is extremely difficult to distinguish between the high traffic volume of a DDoS attack and a large number of legitimate users accessing a targeted network service or resource. Although these attacks have been widely studied, few works collect and analyse truly representative characteristics of DDoS traffic. Current research mostly focuses on DDoS detection and mitigation with predefined DDoS datasets, which are often hard to generalise to various network services and legitimate users' traffic patterns. In order to deal with the considerably large DDoS traffic flow in a Software Defined Network (SDN), in this work we propose a fast and effective entropy-based DDoS detection approach. We deploy a generalised entropy calculation, combining Shannon and Rényi entropy, to identify the distributed features of DDoS traffic; this also helps the SDN controller deal effectively with heavy malicious traffic. To reduce network traffic overhead, we collect data-plane traffic with signature-based Snort detection. We then analyse the collected traffic for entropy-based features to improve the detection accuracy of two deep learning models: a Stacked Auto Encoder (SAE) and a Convolutional Neural Network (CNN). This work also investigates the trade-off between the SAE and CNN classifiers in terms of accuracy and false positives. Quantitative results demonstrate that the SAE achieved a relatively higher detection accuracy of 94% with only 6% false-positive alerts, whereas the CNN classifier achieved an average accuracy of 93%.
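    The two entropy measures the abstract combines can be sketched directly. During a flood, traffic concentrates on a few destinations, so the entropy of a feature such as the destination-IP distribution drops sharply; this is the signal the detector exploits. The exact feature set and the way the paper fuses the two entropies are not specified here, so the following is a minimal illustration:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(values):
        """Shannon entropy (bits) of a traffic feature, e.g. the
        destination IPs observed in one time window."""
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def renyi_entropy(values, alpha=2.0):
        """Rényi entropy of order alpha (alpha != 1); as alpha -> 1 it
        converges to Shannon entropy. Higher alpha weights heavy hitters
        more, which sharpens the drop under a concentrated flood."""
        counts = Counter(values)
        n = len(values)
        return (1.0 / (1.0 - alpha)) * math.log2(
            sum((c / n) ** alpha for c in counts.values()))
    ```

    A window of traffic spread evenly over 16 destinations yields 4 bits under both measures, while a window dominated by one victim address yields a value near zero; thresholding that drop is the entropy-based detection step.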